
    Using Metrics Suites to Improve the Measurement of Privacy in Graphs

    Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when they are applied to graph privacy. In this paper, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether a metric reports lower privacy for stronger adversaries; evenness, for within-scenario comparisons, indicates whether metric values are spread evenly; and shared value range, for between-scenario comparisons, indicates whether a metric uses a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics into a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
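    The aggregation step described above can be illustrated with a short sketch. The paper applies multi-criteria decision analysis; the simplified weighted-sum variant below, and all function names and numbers in it, are illustrative assumptions rather than the authors' exact method.

    ```python
    # Hypothetical sketch: combine several privacy metrics into one
    # suite score via min-max normalization plus a weighted sum.
    # (The paper uses MCDA methods; this is a simplified stand-in.)

    def min_max_normalize(values):
        """Rescale a metric's per-scenario values to [0, 1]."""
        lo, hi = min(values), max(values)
        if hi == lo:
            return [0.5] * len(values)  # constant metric carries no signal
        return [(v - lo) / (hi - lo) for v in values]

    def suite_score(metric_columns, weights):
        """metric_columns: one list of per-scenario values per metric.
        Returns one aggregated suite score per scenario."""
        normalized = [min_max_normalize(col) for col in metric_columns]
        return [sum(w * col[i] for w, col in zip(weights, normalized))
                for i in range(len(metric_columns[0]))]

    # Three scenarios scored by two hypothetical metrics, equal weights.
    # Normalization puts both metrics on the same [0, 1] scale before
    # aggregation, so metrics with different raw ranges can be combined.
    scores = suite_score([[0.2, 0.5, 0.9], [10, 30, 50]], [0.5, 0.5])
    print(scores)  # one aggregated score in [0, 1] per scenario
    ```

    Normalizing first matters because the individual metrics use incompatible value ranges, which is exactly the shared-value-range problem the paper identifies.
    
    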

    Privacy Policies Across the Ages: Content of Privacy Policies 1996-2021

    It is well known that most users do not read privacy policies, but almost always tick the box to agree with them. While the length and readability of privacy policies have been well studied, and many approaches for policy analysis based on natural language processing have been proposed, existing studies are limited in their depth and scope, often focusing on a small number of data practices at a single point in time. In this paper, we fill this gap by analyzing the 25-year history of privacy policies using machine learning and natural language processing and presenting a comprehensive analysis of policy contents. Specifically, we collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021 and analyze their content in terms of the data practices they describe, the rights they grant to users, and the rights they reserve for their organizations. We pay particular attention to changes in response to recent privacy regulations such as the GDPR and CCPA. We observe some positive changes, such as reductions in data collection post-GDPR, but also a range of concerning data practices, such as widespread implicit data collection for which users have no meaningful choices or access rights. Our work is an important step towards making privacy policies machine-readable on the user side, which would help users match their privacy preferences against the policies offered by web services.

    Measuring Privacy in Vehicular Networks

    Vehicular communication plays a key role in near-future automotive transport, promising features like increased traffic safety or wireless software updates. However, vehicular communication can expose driver locations and thus poses important privacy risks. Many schemes have been proposed to protect privacy in vehicular communication, and their effectiveness is usually shown using privacy metrics. However, to the best of our knowledge, (1) different privacy metrics have never been compared to each other, and (2) it is unknown how strong the metrics are. In this paper, we argue that privacy metrics should be monotonic, i.e. that they indicate decreasing privacy for increasing adversary strength, and we evaluate the monotonicity of 32 privacy metrics on real and synthetic traffic with state-of-the-art adversary models. Our results indicate that most privacy metrics are weak at least in some situations. We therefore recommend using metrics suites, i.e. combinations of privacy metrics, when evaluating new privacy-enhancing technologies.
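    The monotonicity criterion above can be made concrete with a minimal sketch: given a metric's values measured at increasing adversary strengths, check that reported privacy never increases. The function name and the example numbers are hypothetical, not taken from the paper.

    ```python
    # Hypothetical sketch of the monotonicity check: a monotonic privacy
    # metric should never report *more* privacy for a *stronger* adversary.

    def is_monotonic(metric_values):
        """metric_values[i] is the privacy metric measured against the
        adversary of strength i (higher index = stronger adversary).
        Returns True iff the values never increase along the sequence."""
        return all(later <= earlier
                   for earlier, later in zip(metric_values, metric_values[1:]))

    # Illustrative values only: a metric evaluated against four
    # increasingly strong adversaries.
    print(is_monotonic([0.9, 0.7, 0.4, 0.2]))  # True: privacy shrinks
    print(is_monotonic([0.9, 0.7, 0.8, 0.2]))  # False: non-monotonic dip
    ```

    A metric that fails this check can make a weaker privacy-enhancing technology look stronger than it is, which is why the papers above treat monotonicity as a core quality criterion.
    
    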

    On the Strength of Privacy Metrics for Vehicular Communication

    Open access article.

    Privacy in the Smart City - Applications, Technologies, Challenges and Solutions

    Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for attacks, giving structure to the fuzzy term “smart city”. Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.

    POSTER: Evaluating Privacy Metrics for Graph Anonymization and De-anonymization

    Many modern communication systems generate graph data, for example social networks and email networks. Such graph data can be used for recommender systems and data mining. However, because graph data contains sensitive information about individuals, sharing or publishing it may pose privacy risks. To protect graph privacy, data anonymization has been proposed to prevent individual users in a graph from being identified by adversaries. The effectiveness of both anonymization and de-anonymization techniques is usually evaluated using the adversary’s success rate. However, the success rate does not measure privacy for individual users in a graph because it is an aggregate per-graph metric. In addition, it is unclear whether the success rate is monotonic, i.e. whether it indicates higher privacy for weaker adversaries and lower privacy for stronger adversaries. To address these gaps, we propose a methodology to systematically evaluate the monotonicity of graph privacy metrics, and present preliminary results for the monotonicity of 25 graph privacy metrics.

    Technical Privacy Metrics: a Systematic Survey

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method for choosing privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.